Method and apparatus for resource balancing in an automation and alarm architecture
Patent abstract:
A method and system architecture for automation and alarm systems is provided. According to exemplary embodiments, relatively simple processing tasks are performed at the sensor level, with more complex processing being shifted to the gateway entity or a networked processing device. The gateway entity dynamically allocates processing resources for sensors. If a sensor detects that an event is occurring, or predicts that an event is about to occur, the sensor submits a resource allocation request and a power balancer running on the gateway entity processes the request. In response to the resource allocation request, the gateway entity allocates some processing resources to the requesting sensor and the data is processed in real time or near real time by the gateway entity.

Publication number: ES2646632A2
Application number: ES201790010
Filing date: 2015-10-02
Publication date: 2017-12-14
Inventors: Andrei Bucsa; Gregory W. Hill
Applicant: Tyco Safety Products Canada Ltd
IPC main class:
Patent description:
DESCRIPTION

Method and apparatus for balancing resources in an automation and alarm architecture

Cross-reference to the related application

This application claims priority to US Provisional Patent Application No. 62/059,410, filed on October 3, 2014 and titled "Wireless Security and Home Automation". The entirety of this application is incorporated herein by reference.

Field of the description

The description relates to the field of automation and alarm systems, and more particularly to methods and apparatus for balancing resources in a system architecture for an automation or alarm system.

Background of the description

Automation and alarm systems, such as home automation systems, fire alarm systems and security systems, typically include one or more gateway entities, for example, alarm panels, which receive information from various sensors distributed throughout a structure or area. In response to particular types of input signals, the sensors or the gateway entity sometimes trigger an action through an output device. For example, a typical fire alarm system includes one or more sensors, for example, smoke detectors or manually operated pull stations, etc., and output devices, for example, strobe lights, sirens, public announcement systems, etc., operatively connected to a gateway entity.

In some traditional automation and alarm systems, a sensor includes processing capacity to process sensor data. For example, a sensor monitors electrical signals gathered by the sensor's detection device for variations that represent the occurrence of an alarm condition. For data recording and data analytics purposes, the sensor forwards information to the gateway entity, and the gateway entity in turn forwards the data to a cloud processing device. The cloud processing device gathers data from many sensors and/or gateway entities, analyzes the data, and generates reports.

Processing the data in the sensor requires the sensor to consume energy, which can be problematic, especially if the sensor is powered by batteries. Furthermore, individual sensors are relatively complex and expensive, because each sensor must be provided with sufficient processing resources so that it can process its own data in isolation. During periods of inactivity, these processing resources are not used and, therefore, are wasted. Furthermore, if a new update is developed for the algorithm that processes the sensor data, each sensor must receive and process the update. This update process can be complicated and may take a long time in an environment with many sensors.

Compendium

This description addresses these and other problems with conventional alarm and automation systems. According to the exemplary embodiments, relatively simple processing tasks are performed at the sensor level, with more complex processing moving to the gateway entity. Other processing tasks that require greater processing power, for example, data analytics, are sent to a network processing device, for example, a cloud processing device, and/or to another third-party device. In this way, a hierarchy of processing capabilities is provided, with the sensors forming a lower level, the gateway entity forming an intermediate level, and the cloud/third-party processing devices forming a higher level. The gateway entity dynamically allocates memory or processing resources to the sensors.
If a sensor detects that an event is occurring, or predicts that an event is about to occur, the sensor submits a resource allocation request and a power balancer that operates at the gateway entity processes the request. In response to the resource allocation request, the gateway entity allocates some processing resources to the requesting sensor. By balancing the processing power between the sensors and the gateway entity, the sensors can be made simpler and less expensive, due to the decrease in the required processing power. In addition, improved capabilities, such as processing voice command data received through the sensors, can be provided by the system, because the sensors do not need to support the entire load or process complex data. Furthermore, new features can be deployed, and old features can be updated, by modifying only the processing logic in the gateway entity, instead of the processing logic of each of the multiple sensors.

Brief description of the drawings

By way of example, specific exemplary embodiments of the system and method described will now be described, with reference to the accompanying drawings, in which:

FIG. 1 is a block diagram illustrating an exemplary system architecture according to the present description.
FIG. 2 represents a hierarchy of processing devices in the system architecture.
FIG. 3 is a block diagram illustrating an exemplary detection device or output device according to the present description.
FIG. 4 is a block diagram illustrating an exemplary gateway entity according to the present description.
FIG. 5 is a system context diagram illustrating exemplary interactions between the devices of the system architecture from the perspective of the gateway entity according to the present description.
FIG. 6-10B represent exemplary data structures suitable for use according to the present description.
FIG. 11 is a data flow diagram illustrating exemplary data flows through the system architecture according to the present description.
FIG. 12 is a flow chart representing an exemplary method performed by a sensor device according to the present description.
FIG. 13 is a flow chart representing an exemplary method performed by a gateway entity according to the present description.
FIG. 14A-14B are exemplary processing flow charts that represent the processing steps performed in an exemplary interactive voice service according to the present description.

Detailed Description

This description refers to a system architecture for automation and alarm systems, for which a hierarchy of processing capabilities is defined. Unlike conventional systems in which the majority of sensor data processing is handled by the respective sensors, the exemplary system architecture moves processing tasks within the hierarchy in order to conserve resources, perform load balancing, and assign processing tasks to the devices that are most suitable for performing them.

FIG. 1 represents an example of such a system architecture 10. The system architecture 10 of FIG. 1 is intended to be illustrative only, and one skilled in the art will recognize that the embodiments described below can be employed in a system architecture that has more, fewer, and/or different components than the system architecture 10 of FIG. 1.
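The request/allocation exchange summarized above can be sketched roughly as follows. This is a simplified illustration in Python only; the message fields, class names, and the use of a thread pool are assumptions made for the sketch, not details taken from the description.

```python
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor

@dataclass
class ResourceAllocationRequest:
    """Request submitted by a sensor that detects or predicts an event."""
    sensor_id: str
    event_type: str          # e.g. "smoke", "glass_break"
    ongoing: bool = True     # False signals that a previously reported event has ended

class PowerBalancer:
    """Illustrative balancer running on the gateway entity."""

    def __init__(self, max_workers: int = 4, buffer_bytes: int = 64_000):
        self._workers = ThreadPoolExecutor(max_workers=max_workers)
        self._buffers: dict[str, bytearray] = {}
        self._buffer_bytes = buffer_bytes

    def handle(self, request: ResourceAllocationRequest) -> None:
        if request.ongoing:
            # Reserve a per-sensor buffer so incoming data can be processed in
            # real time or near real time at the gateway entity.
            self._buffers.setdefault(request.sensor_id, bytearray(self._buffer_bytes))
        else:
            # Release the resources when the sensor reports that the event has ended.
            self._buffers.pop(request.sensor_id, None)

    def process(self, sensor_id: str, chunk: bytes) -> None:
        # Hand the sensor's data to a worker thread for near-real-time processing.
        self._workers.submit(len, chunk)   # `len` is a stand-in for real processing logic
```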
The system architecture 10 includes a monitored zone 12. The monitored zone 12 represents a logical grouping of monitored devices, and may or may not correspond to a physical location defined by physical boundaries, for example, a room or a building. The monitored zone 12 represents, for example, part or all of a residential house, a company, a school, an airport, etc.

The exemplary monitored zone 12 includes a series of sensors, sensor 14 and sensor 16. Sensors include devices that measure or detect a physical property, such as temperature, pressure, the presence of light or smoke, or the position of a switch. A sensor translates the physical property into an electrical signal, for example, using a transducer. Examples of sensors include environmental sensors, for example, temperature sensors, pressure sensors, humidity sensors, light level sensors, etc.; status sensors, for example, door and window switches, smoke detectors, motion detectors, valve status detectors, level indicators, flow level indicators, etc.; health sensors, for example, heart rate sensors, blood flow sensors, sugar level sensors, body temperature sensors, etc.; location sensors, for example, Global Positioning System transmitters or other location-based sensors placed on people, animals, property, etc.; as well as general purpose or multipurpose sensors, for example, microphones, cameras, manual pull switches, etc.

The exemplary monitored zone 12 also includes an output device 18. Output devices include devices that provide an output signal, such as sound, light, vibration, or an instruction to take an action, in response to a condition. The condition that causes the output device to provide the output signal may be, for example, the detection of a particular output of a sensor, for example, the sensor signal falling below or rising above a predefined threshold value, the detection of a predefined pattern in the sensor data, or a trigger message sent to the output device by another device.

Examples of output devices include notification devices such as loudspeakers, strobe lights, a motor that induces vibration in a mobile device, etc. Some types of notification devices are configured to provide an output perceptible by a human, for example, a notification device that provides a visual, auditory, haptic, or other human-perceptible output, while other types are configured to provide an output perceptible by a machine, for example, a silent alarm that transmits a notification of a security incident to a server at a security company, or a fire call box that sends an alert to a fire station. Other examples of output devices include devices that control other devices or objects. Examples of such output devices include devices that open or close a door, turn a light on or off, adjust a heating, ventilation, or air conditioning (HVAC) device, etc.

A gateway entity 20 monitors and controls the sensors 14, 16 and the output device 18 of the monitored zone 12. Gateway entities include devices that manage or monitor the devices of a monitored zone 12, and that optionally communicate with devices outside the monitored zone 12. The exemplary gateway entity 20 processes the input data received from the sensors 14, 16, determines whether the sensor data indicates that an action, such as raising an alarm, should be taken, and triggers the output device 18. Examples of gateway entities 20 include dedicated control panels and local computing devices such as personal computers or local servers.
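The output-device condition described above, a sensor signal crossing a predefined threshold, a recognized pattern, or an explicit trigger message, can be summarized with a short sketch. The function and parameter names below are assumptions made for illustration, not part of the description.

```python
def should_activate_output(reading: float, low: float, high: float,
                           pattern_detected: bool, trigger_received: bool) -> bool:
    """Decide whether an output device 18 should provide its output signal.

    The condition is met when the sensor signal falls below or rises above a
    predefined threshold value, when a predefined pattern is detected in the
    sensor data, or when a trigger message arrives from another device.
    """
    threshold_crossed = reading < low or reading > high
    return threshold_crossed or pattern_detected or trigger_received
```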
The gateway entity 20 can be deployed in the monitored zone 12, located near the monitored zone 12, or remotely located while remaining communicatively connected to the monitored zone 12.

The embodiment of FIG. 1 includes a single monitored zone 12 controlled by a single gateway entity 20. In other embodiments, multiple monitored zones can be controlled by different gateway entities, or the monitored zones can be collectively monitored by a single gateway entity.

The sensors 14, 16 and the output device 18 are in communication with and operatively connected to the gateway entity 20. The connection can be a wireless connection, for example, via Wi-Fi or a low-power short-range radio communication technology, or a wired connection, for example, via copper wiring or fiber optic communications, or through a power line network.

The gateway entity 20 communicates with remote entities through a network 22. A network 22 is a collection of two or more nodes and links between the nodes that allow communicated information to be passed between the nodes. A network 22 may be wired or wireless. Examples of a network 22 include computer networks, such as the Internet, a local area network, or a metropolitan area network, and telephone networks such as landline telephone exchanges and wireless telecommunications networks.

There is a critical timing path 24 between the gateway entity on one side, and the sensors 14, 16 and the output device 18 on the other side. The critical timing path 24 carries time-sensitive data, for example, sensor data that may be indicative of the occurrence of an event, such as a fire, break-in, or medical emergency, that is designated for real-time or near-real-time processing as the data is being generated. Because the data of the sensor 14 is time sensitive and is being processed away from the sensor 14 that generated the data, the communication path(s) for exchanging information between the sensor 14 and the processing device, the gateway entity 20 in this example, must be robust and relatively fast.

In the embodiment shown, there is a non-critical timing path 26 between the gateway entity 20 and the network 22. In this embodiment, the devices on the other side of the network 22 from the gateway entity, that is, the cloud processing device 28 and the third-party service 30, perform relatively complex processing tasks that are not time sensitive. For example, the cloud processing device 28 performs tasks such as data aggregation and recording, reporting, and data analytics. Because these calculations are not time sensitive and do not need to be performed in real time as the data is generated, the communication path between the gateway entity 20 and the network 22 is considered a non-critical timing path 26. However, in some embodiments, the connections between the gateway entity 20 and other devices in the architecture 10 can be designated as critical timing paths, depending on the application.

The devices of the system architecture 10, including the gateway entity 20, the sensors 14 and 16, and the output device 18, include some amount of processing power. A cloud processing device 28 increases the processing capabilities of the other devices in the architecture 10. A cloud processing device 28 is a device that is accessible to the gateway entity 20 through the network 22 and which provides additional processing capabilities that can be invoked by the gateway entity 20 or another device in the system architecture 10 in order to perform processing tasks.
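As a rough illustration of the distinction between the critical timing path 24 and the non-critical timing path 26, the following sketch routes messages according to whether their payload is time sensitive. The event categories and path names are assumptions made only for this example.

```python
# Illustrative event types that would travel over the critical timing path 24
TIME_SENSITIVE_EVENTS = {"fire", "break_in", "medical_emergency"}

def select_path(message_type: str) -> str:
    """Pick a communication path for a message (names are illustrative).

    Time-sensitive sensor data travels over the critical timing path toward the
    gateway entity for real-time or near-real-time processing; aggregation,
    reporting, and analytics traffic can use the non-critical timing path
    toward the cloud processing device.
    """
    if message_type in TIME_SENSITIVE_EVENTS:
        return "critical_timing_path_24"
    return "non_critical_timing_path_26"

# Example usage
assert select_path("fire") == "critical_timing_path_24"
assert select_path("daily_report") == "non_critical_timing_path_26"
```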
A third-party service 30 receives information from the gateway entity 20 and/or the cloud processing device 28. A third-party service 30 is an entity that receives information about the status of the sensors and/or monitored zones in the architecture 10 and is normally different from the entity that owns or controls the devices of the monitored zone 12. The third-party service 30 may, but need not, be operated by the same entity that operates the cloud processing device 28. The third-party service 30 can take an action in response to the information, such as recording the information for future use, adding the information to other information to generate a report, recognizing emergencies, and dispatching first responders to the monitored zone 12. Examples of third-party services include security companies, fire stations, medical offices and hospitals, and data storage centers.

According to the exemplary embodiments, the devices of the system architecture 10 are organized in a hierarchy for purposes of sensor data processing, updating a system state, and triggering output devices, among other possibilities. FIG. 2 illustrates an example of a hierarchy 32 of devices in the system architecture 10.

At a lower level 34 of hierarchy 32, the sensors and output devices are grouped together. Sensors and output devices typically have limited processing capabilities and limited power, and therefore are poorly suited for complex processing tasks. However, such devices can be relied on to perform relatively simple processing tasks. In addition, these devices are typically deployed in a specific context and/or invoked to monitor a very particular type of input. For example, a glass break sensor is a type of sensor that uses a microphone to record sound, for example, in the immediate vicinity of a window, which is then analyzed in order to detect a predetermined pattern or signal indicative of the sound of glass breaking. Even if the glass break sensor has only limited processing capabilities, those capabilities can be used to detect relatively simple glass break patterns, thereby reducing the need to process all of the glass break sensor's sound data at the gateway entity 20.

If a device at the lower level 34 of hierarchy 32 is unable to process some input data, or is not configured to do so, the device forwards the data to a device at the intermediate level 36 of hierarchy 32. The intermediate level 36 includes gateway entities, such as control panels, local computing devices, and in some situations mobile devices such as cell phones and tablets. Such devices typically have improved processing and power capabilities compared to devices at the lower level 34, which makes them well suited for most processing tasks. Intermediate level 36 devices can perform more general-purpose analyses, unlike the special-purpose analyses performed at the lower level 34, and/or perform more complex analyses compared to the lower level 34. Devices at the intermediate level 36 may occasionally become overwhelmed in the presence of many requests for data processing, or they may encounter a processing task that is beyond their capabilities. In this case, processing tasks can be escalated up the hierarchy to the upper level 38. At the upper level 38, cloud and third-party processing devices perform complex tasks on behalf of the system.

Devices at different levels of hierarchy 32, and different devices at the same level of hierarchy 32, may include different logic to process the same data.
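The division of labor just described, in which a constrained device performs a simple local check and hands anything it cannot resolve to the next level of the hierarchy, can be sketched as follows. The scoring function, thresholds, and level names are illustrative assumptions; a real glass break sensor would use a pattern-matching algorithm rather than a bare RMS level.

```python
HIERARCHY = ("lower_level_34", "intermediate_level_36", "upper_level_38")

GLASS_BREAK_THRESHOLD = 0.8   # illustrative score above which the sensor raises an event

def rms(samples: list[float]) -> float:
    """Root-mean-square level: a stand-in for a real glass break pattern score."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def screen_locally(samples: list[float]) -> str:
    """Simple screening a lower-level device might afford to run on its own."""
    score = rms(samples)
    if score >= GLASS_BREAK_THRESHOLD:
        return "event_notification"          # handled locally: report the event
    if score >= 0.5 * GLASS_BREAK_THRESHOLD:
        return "escalate"                    # inconclusive: forward the data
    return "discard"

def escalate(level: str) -> str | None:
    """Return the next level of the hierarchy, if any, to forward data to."""
    index = HIERARCHY.index(level)
    return HIERARCHY[index + 1] if index + 1 < len(HIERARCHY) else None
```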
For example, a smoke detector at the lower level 34 and a gateway entity at the intermediate level 36 may both have logic to analyze the smoke detector data to determine whether there is a fire in the monitored area. However, the logic of the gateway entity may be more sophisticated than the logic of the smoke detector. In this way, the smoke detector and the gateway entity could process the same data and reach different conclusions. This capability can be advantageously used to provide a targeted and sophisticated analysis of the data. If a device at a lower level of the hierarchy processes data and determines that the data almost, but not quite, indicates the presence of an alarm condition, for example, the processing results do not exceed an alarm threshold but approach the threshold within a predefined tolerance, then the lower-level device can forward the data to another device in the architecture that has a more sophisticated or different processing capability.

In addition, different devices at the same level of hierarchy 32 may have different logic for processing data. Therefore, different devices can be made to employ location-dependent or context-sensitive processing logic. For example, a smoke detector deployed in a kitchen can be equipped with logic to eliminate false alarms due to cooking, while a smoke detector deployed in a front hall can omit this logic. The logic deployed on a device may also depend on the hardware configuration of the device. For example, a sensor that has new or improved hardware can deploy more complex or specialized processing logic compared to an older or simpler sensor. In addition to providing location- or context-sensitive processing, this capability allows a device at one level of hierarchy 32 to forward data to another, more specialized device, possibly through a gateway entity, when presented with data that can be better handled by the specialized device.

In addition to improved processing, another advantage of hierarchy 32 is that improved configuration settings can be developed at the upper levels of hierarchy 32, for example, the intermediate level 36 and the upper level 38, and pushed down to the lower levels of hierarchy 32. For example, if a sensor at the lower level 34 determines that the input data almost, but not quite, rises to the level of an alarm condition, the sensor can forward the input data to a device at the intermediate level 36 for additional processing. If the device at the intermediate level 36 determines that the data should have triggered an alarm condition, the device at the intermediate level 36 may review the configuration of the device at the lower level 34 to determine whether one or more configuration settings should be changed so that the lower-level device can better analyze the input data in the future. For example, the device at the intermediate level could lower the alarm threshold of the lower-level device, or it could alter the algorithm employed by the lower-level device based on the algorithm used by the intermediate-level device or another device in the architecture 10.

Exemplary structures of devices in the hierarchy, in particular an exemplary sensor 14 and an exemplary gateway entity 20, are now described with reference to FIG. 3 and 4.

The sensor 14 shown in FIG. 3 includes a detector 40. Detectors include devices that measure or identify a phenomenon and provide an output in response to the presence of the phenomenon, the absence of the phenomenon, or a change in the phenomenon.
Examples of detectors include light or image sensors, microphones, thermometers/thermocouples, barometers, etc. The output of the detector 40 is processed by a processor 42. Processors 42 include devices that execute instructions and/or perform mathematical, logical, control, or input/output operations. The processor 42 of the sensor 14 may be a specialized processor that has limited processing capabilities and is designed to operate in low-power environments. For example, the processor 42 of the sensor 14 may implement the Reduced Instruction Set Computing (RISC) or Acorn RISC Machine (ARM) architecture. Examples of processors 42 include the Atom™ family of processors from Intel Corporation of Santa Clara, California, the A4 family of processors from Apple, Inc. of Cupertino, California, the Snapdragon™ family of processors from Qualcomm Technologies, Inc. of San Diego, California, and the Cortex® family of processors from ARM Holdings, PLC of Cambridge, England. The processor 42 may also be a custom processor.

The sensor 14 includes a power interface 44 for supplying electrical power to the components of the sensor 14. The power interface 44 may be a connection to an external power source, such as a wired connection to the power supply of a home or business. Alternatively or additionally, the power interface 44 may include an interface to a rechargeable or non-rechargeable battery, or a capacitor.

The exemplary sensor 14 is coupled for wireless and wired communication. Accordingly, the sensor 14 includes a communication interface 46 for managing the communication between the sensor 14 and other entities in the architecture 10. The communication interface 46 accepts incoming transmissions of information from the other entities of the architecture 10, manages the transmission of information from the sensor 14 to the other entities, and provides quality control for data transmissions, among other functionality related to communication. The sensor 14 can be connected to the network 22 through the communication interface 46.

The communication interface 46 communicates wirelessly with the other entities of the architecture 10 using a radio transmitter/receiver 48. The radio transmitter/receiver 48 modulates and demodulates electromagnetic signals transported wirelessly through a medium, such as air or water, or through no medium at all, as in space. In exemplary embodiments, the radio transmitter/receiver 48 of the sensor 14 may be a specialized radio transmitter/receiver that communicates over a relatively short range using relatively low power. Examples of lower-power radio transmitters/receivers 48 include devices that communicate via short-wavelength ultra high frequency (UHF) radio waves. Exemplary low-power radio transmitters/receivers 48 can implement a communication protocol such as a ZigBee protocol from the ZigBee Alliance, the Bluetooth® Low Energy (BLE) protocol of the Bluetooth Special Interest Group, the Z-Wave protocol of the Z-Wave Alliance, the IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) protocol developed by the Internet Engineering Task Force (IETF), or a near field communications (NFC) protocol. Alternatively or additionally, the sensor 14 could be coupled for wireless communication using other transmission/reception technologies, such as free-space optical, sonic, or electromagnetic induction.

The exemplary communication interface 46 is also connected to a network interface 50 for interfacing with a wired communication network.
The network interface 50 may be, for example, a network interface controller (NIC) for establishing a wired connection to a computer network such as the Internet, a fiber optic interface for connecting to a fiber optic network, a cable interface for connecting to a cable television network, a telephone jack for connecting to a telephone network, or a power line interface for connecting to a power line communications network.

Optionally, the sensor 14 may include an output device 18. For example, a smoke detector may include a sensor to detect the presence of smoke, and one or more output devices, for example, a siren and a strobe, which are triggered based on the sensor's output.

The sensor 14 includes a memory 52 for holding data, instructions, and other information for use by the other sensor components. In exemplary embodiments, the memory 52 of the sensor 14 may be a specialized memory that includes relatively limited storage and/or uses relatively low power. The memory 52 may be a solid-state storage medium such as flash memory and/or random access memory (RAM). Examples of memory 52 include a Secure Digital™ (SD) memory from the SD Association. The memory 52 can also be a custom memory.

The memory 52 includes a temporary data store 54 for temporarily storing data from the detector 40 until the data can be processed by the processor 42 or transmitted using the communication interface 46. The temporary data store 54 may be, for example, a circular temporary store. The data in the temporary data store 54 can be processed in a first-in, first-out (FIFO) manner, in a last-in, first-out (LIFO) manner, based on the importance of individual data units in the temporary store, or based on a custom processing order. The temporary data store 54 can be placed in a fixed location in the memory 52.

In addition to the temporary data store 54, the memory 52 includes a network temporary store 56 for storing information transmitted or received through the communication interface 46. The processor 42 assembles the data for transmission over the communication interface 46, and stores the data units in the network temporary store 56. The communication interface 46 regularly retrieves pending data from the network temporary store 56 and transmits it to its destination. Upon receiving data from another device of the architecture 10, the communication interface 46 places the data in the network temporary store 56. The processor 42 regularly retrieves the pending data from the network temporary store and processes the data according to instructions stored in the memory 52, or encoded in the processor 42. In order to distinguish between received data and data to be transmitted, the network temporary store 56 can be subdivided into an "in" temporary store and an "out" temporary store. The network temporary store 56 can be placed in a fixed location in the memory 52.

The memory 52 further stores a configuration 58 that includes rules 60, filters 62, processing logic 64, and configuration parameters 66. A configuration 58 is a description of the hardware and/or software present in a device. Rules 60 describe one or more actions that occur in response to one or more conditions. Filters 62 are logic that is executed on input and/or processed data in order to determine a next action to be taken with the data, such as processing the data locally, saving the data in a record, or forwarding the data to another device for processing.
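A minimal sketch of the two stores just described is given below, using Python's collections.deque as the circular structure; the class names and capacity value are assumptions for illustration, not part of the description.

```python
from collections import deque

class TemporaryDataStore:
    """Circular temporary store 54: oldest detector samples are overwritten when full."""
    def __init__(self, capacity: int = 256):
        self._buffer = deque(maxlen=capacity)

    def push(self, sample: float) -> None:
        self._buffer.append(sample)

    def pop_fifo(self) -> float | None:
        # First-in, first-out retrieval; a LIFO variant would use pop() instead.
        return self._buffer.popleft() if self._buffer else None

class NetworkTemporaryStore:
    """Network temporary store 56 subdivided into "in" and "out" areas."""
    def __init__(self):
        self.inbound: deque = deque()    # data received by the communication interface
        self.outbound: deque = deque()   # data assembled by the processor for transmission
```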
The processing logic 64 provides instructions and/or parameters that operate on input data, or, in some examples, without input data, to generate new output data, transform the input data into new data, or take an action with regard to the input data or some other data. The processing logic 64 can be applied to the data generated by the detector 40 in order to take an action, such as raising an alarm, changing a security or monitoring state of the architecture 10, operating an output device, etc. The gateway entity 20 can be deployed with different types of processing logic 64, the different types being specialized respectively for different types of sensor data. Configuration parameters 66 include values for settings that describe how the hardware and/or software of the configured device operates. The configuration 58, rules 60, filters 62, processing logic 64 and configuration parameters 66 are described in more detail in relation to FIG. 7-10B, below.

At least one communication link that the sensor 14 uses to communicate, for example, the link with the gateway entity 20, is a critical timing path 24. As noted above, the critical timing path 24 carries time-sensitive data between, for example, the sensor 14 and the gateway entity 20. Accordingly, the critical timing path 24 may require real-time or near-real-time data transmission. The processor 42, the communication interface 46, and the memory 52 of the sensor 14 can be configured to prioritize time-sensitive data so that the data is processed, transmitted, and acted upon quickly. For example, the time-sensitive data of the detector 40 can be marked with a special header by the processor 42. The time-sensitive data can be analyzed, using the processing logic 64, in an accelerated manner, for example, the time-sensitive data can be prioritized over other, non-time-sensitive data. If the time-sensitive data is to be processed on another device in hierarchy 32, the time-sensitive data can be placed in a high-priority segregated area of the network temporary store 56 or sent directly to the communication interface 46 for transmission. The communication interface 46 may attach a special header to time-sensitive data, marking the data as high priority. Upon receiving data with a high-priority header, the communication interface 46 may place the data in the high-priority area of the network temporary store 56 and/or announce the arrival of the data to the processor 42.

The communication interface 46 establishes one or more high-priority communication channels between the sensor 14 and the gateway entity 20. The high-priority communication channels can be broadband channels to ensure that there is sufficient bandwidth to transmit the data in real time or near real time. The high-priority communication channels can be redundant, so that backup channels are available if one of the communication channels ceases to function. Redundancy can be established using multiple types of transmission media, for example, wired and wireless media, different types of wired media, etc.,
using different routes between the sensor 14 and the gateway entity 20, for example, a direct communication route to the gateway entity 20 and an indirect route from the sensor 14 through another device, such as another sensor 16, to the gateway entity 20, or establishing alternative processing devices, for example, establishing a communication channel to the cloud processing device 28 in the event that the gateway entity is unreachable or unable to process sensor data, among other possibilities. Similar procedures can be used for time-sensitive data at the gateway entity 20.

The sensor 14 shown in FIG. 3 communicates primarily with the gateway entity 20, which may be similar to the sensor 14 in terms of the types of components used. However, because there are fewer restrictions on the gateway entity 20 in terms of size, location, and energy consumption, the gateway entity 20 may have more components and/or more powerful components than the sensor 14. Typically, the gateway entity 20 is a panel or a computing device located in or near the monitored zone 12. FIG. 4 is a block diagram representing the structure of an exemplary gateway entity 20.

The gateway entity 20 includes a processor 42. The processor 42 of the gateway entity 20 may be similar to the processor 42 of the sensor 14; alternatively or additionally, the processor 42 of the gateway entity 20 may be a central processing unit (CPU) having one or more processing cores, one or more coprocessors, and/or on-chip cache memory. In some embodiments, the processor 42 of the gateway entity 20 may be a specialized processor that has improved processing capabilities in comparison to the processor 42 of the sensor 14 and, as a result, may present increased energy consumption and/or heat generation compared to the processor 42 of the sensor 14. For example, the processor 42 of the gateway entity 20 may implement the Complex Instruction Set Computing (CISC) architecture. Examples of processors 42 include the Celeron®, Pentium®, and Core™ processor families of Intel Corporation of Santa Clara, California, and the Accelerated Processing Unit (APU) and central processing unit (CPU) processors of Advanced Micro Devices (AMD), Inc. of Sunnyvale, California.

The gateway entity 20 further includes a power interface 44. The power interface 44 can be connected directly to the power distribution system or to the power grid at the location where the gateway entity 20 is deployed. The power interface 44 may include an interface to accept alternating current (AC), direct current (DC), or both. The power interface 44 may include a converter to convert AC into DC, or vice versa. The power interface 44 may include a backup battery to operate the gateway entity 20 during power outages.

The gateway entity 20 includes a communication interface 46, a radio 48, and a network interface 50 similar to the respective components of the sensor 14. The gateway entity 20 can be expected to communicate with more devices than the sensor 14 and, consequently, can be provided with more, or more complex, communication interfaces 46, radios 48, and network interfaces 50 than the sensor 14. The gateway entity 20 can be assigned to a particular monitored zone 12 and, consequently, can maintain communication with the devices in the monitored zone 12 through the communication interface 46. The gateway entity 20 can also be connected to the network 22 through the communication interface 46.

The gateway entity 20 includes a memory 52.
The memory 52 of the gateway entity 20 may be similar to the memory 52 of the sensor 14, but typically has greater storage space and/or improved performance, such as improved read/write times, improved seek times, and/or improved data redundancy or information backup capabilities. Examples of a memory 52 suitable for use in the gateway entity 20 include a random access memory (RAM), a hard disk drive (HDD), or a solid state drive (SSD), among other possibilities, or a combination of the same or different types of storage devices.

The memory 52 provides a network temporary store 56 similar to the network temporary store 56 of the sensor 14. The memory 52 also includes a storage area for sensor data 70, which includes sensor data from the sensors in the monitored zone 12 monitored by the gateway entity 20, for example, first sensor data 72, second sensor data, etc. The sensor data 70 can be stored in a separate partition of the memory 52 compared to other elements stored in the memory 52.

The memory 52 of the gateway entity 20 also stores a configuration 58, rules 60, filters 62, processing logic 64, and gateway entity configuration parameters 74. These elements may be similar in structure to the respective elements of the sensor 14, although they may differ in content, for example, different conditions and actions in the rules 60, different ways of filtering the data in the filters 62, different instructions in the processing logic 64, different values in the configuration parameters 74, etc.

The gateway entity 20 further includes a balancer 68, which performs a double balancing process to allocate resources, for example, processing resources, memory resources, etc., in the architecture 10. The balancer 68 receives event notifications when a sensor determines that an event is taking place or is about to take place. In response, the balancer 68 allocates resources at the gateway. For example, the balancer 68 may allocate space in the memory 52 for the data 72 of the sensor 14 and/or may generate new threads or otherwise allocate processing resources within the processor 42. The double balancing process is described in more detail in FIG. 5 and 11-14B.

As noted above, the gateway entity 20 forwards some data to a cloud processing device 28 for further processing. The cloud processing device 28 has a structure similar to that of the gateway entity 20. In order to avoid redundancy, the structure of the cloud processing device 28 is not shown separately. The cloud processing device 28 can be deployed in a manner that allows qualitatively and quantitatively improved components compared to the gateway entity 20. For example, the memory of the cloud processing device 28 may include several hard disk drives (HDD) or solid state drives (SSD), among other storage possibilities. The memory of the cloud processing device 28 may be arranged in a redundant array of independent disks (RAID) configuration for improved reliability and performance. In addition, the processor of the cloud processing device 28 may be qualitatively or quantitatively more powerful than the processor 42 of the gateway entity 20. For example, multiple processors 42 may be provided in the cloud processing device 28, which may include more processing cores than the processor 42 of the gateway entity 20. In addition, the processor(s) 42 of the cloud processing device 28 may be of a different type and more powerful than the processor 42 of the gateway entity 20.
For example, the cloud processing device 28 may employ a more powerful central processing unit (CPU) than the gateway entity 20, or it may employ more or better coprocessors than the CPU of the gateway entity 20, or it may employ a graphics processing unit (GPU) that is more powerful than the CPU of the gateway entity 20.

As shown in FIG. 5, the sensor 14, the gateway entity 20, the cloud processing device 28, and the third-party service 30 can interact with each other, and with other elements of the architecture 10, in order to process sensor data. FIG. 5 is a system context diagram showing how, in an exemplary embodiment, the entities of the system architecture 10 interact with each other according to a double balancing process 76. The double balancing process 76 encompasses the steps or actions performed by the architecture 10 in order to balance the processing of sensor data and manage the entities of the architecture 10. The double balancing process 76 includes the actions described in more detail in the flowcharts of FIG. 11-14B.

The sensor 14 generates input data for the double balancing process 76 using the detector 40. The input data is stored in the temporary data store 54 of the sensor until it can be processed by the processor 42. The processor 42 retrieves the data from the temporary data store 54 and makes an initial determination, based on a filter 62, either to process the data locally or to forward the data to another device in the architecture 10 for processing. If the data is processed locally and the sensor determines that an event is occurring or is expected to occur, for example, an alarm condition is indicated or considered likely, the sensor 14 generates, as output to the double balancing process 76, an event notification. An event notification signals the occurrence or prediction of the event to the gateway entity 20, so that the balancer 68 of the gateway entity 20 can begin to allocate resources and respond to the event. An event notification message may indicate that the event is occurring or is likely to occur, or the event notification may indicate that a previously flagged event has ended. In some embodiments, the event notification includes characteristics of the sensor 14, such as initial data from the sensor 14, information about the configuration of the sensor 14, for example, details about the firmware, software, hardware, etc., a model identification of the sensor 14, the type of the sensor 14, for example, smoke detector, glass break sensor, etc., or maintenance information, for example, resistance measurements across various points in the circuitry of the sensor 14, battery level or network connectivity measurements of the sensor 14, power consumption of the sensor 14, etc.

If the processor 42 determines that the data cannot or should not be processed locally, then the sensor 14 generates, as output to the double balancing process 76, a message that includes the unprocessed data for processing by another device in the architecture 10. The unprocessed data includes data, for example, data generated by the sensor 14, that is designated by the double balancing process 76 for processing by a device other than the device where the unprocessed data currently resides. Unprocessed data may include data that is partially processed by the device where the unprocessed data currently resides.
For example, the sensor 14 can perform a partial processing of the data, and forward some or all of the raw data, together with the results of the processing, to the double balancing process 76 as unprocessed data. In other embodiments, unprocessed data can be completely processed by the sensor 14, but can be forwarded to another device for further consideration.

In some embodiments, the sensor 14 records data with the detector 40 that is used for sound and voice recognition. For example, the detector 40 can receive speech data as input and either process the speech data locally with the processor 42, or forward the speech data to the double balancing process 76 as unprocessed data. Speech data can be used for speech recognition and/or authentication in the architecture 10. For example, speech data can be used to authenticate a user when the user enters the monitored zone 12. If the user fails to authenticate, the sensor 14 can send an event notification to trigger an alarm condition indicating the presence of an unauthorized user in the monitored zone 12.

The sensor 14 receives, as output from the double balancing process 76, triggers and remote access requests. A trigger is an instruction from another device, such as the gateway entity 20, to change the state of the sensor or take an action, for example, sounding an alarm, lighting a strobe, or playing an audio recording, in response to the detection of an event by the other device. For example, a trigger can inform the sensor 14 that the architecture 10 is in an alarm configuration, and the internal rules of the sensor 14 can provide a particular type of notification in response, for example, triggering an integrated output device 18 of the sensor 14. Alternatively or additionally, the trigger message may instruct the sensor 14 to perform a task such as sounding an alarm or changing a temperature setting in the monitored zone 12.

A remote access request is a request from a device of the architecture 10 that directs the sensor 14 to give control over some or all of the components of the sensor 14 to an external entity. For example, a remote access request could order the sensor 14 to provide audio or visual data from the temporary data store 54 and/or the detector 40 of the sensor. The remote access request could also order the sensor 14 to cede control of an output device, such as a loudspeaker, so that an entity can communicate remotely through the output device and/or a detector such as a microphone. The remote access request could also request that the sensor 14 reposition itself, as in the case of a remotely operated video camera that pans, tilts, or rotates in response to commands from the external entity, among other possibilities.

The third-party service 30 monitors the status of the devices and zones of the architecture 10 for conditions that require additional action, such as dispatching emergency services or contacting a user. Accordingly, the third-party service 30 is provided with, as an output of the double balancing process 76, status updates indicative of any change in the security or monitoring status of the architecture 10. In addition, the third-party service 30 may receive processed or unprocessed data as an output of the double balancing process 76, which may allow the third-party service 30 to monitor the status of the sensors in the monitored zone 12.
For example, the third-party service 30 may receive raw data from a sensor such as a smoke detector, and/or processed data that summarizes the events that led to an alarm condition, among other possibilities. Based on the status updates and the processed or unprocessed data, the third-party service 30 may contact, or may facilitate contact with, first responders such as medical services, fire stations or police stations, a private security company, etc. Accordingly, the third-party service 30 may provide, as input to the double balancing process 76, first responder information. The first responder information can include information, such as selected parts of the processed or unprocessed data, that helps the first responders handle an event. For example, the first responder information could include Global Positioning System (GPS) coordinates that indicate the location of a sensor 14 that is currently presenting an alarm condition. The first responder information may originate in the sensor 14, in the gateway entity 20, or in the cloud processing device 28, or it may be stored in the third-party service 30.

The cloud processing device 28 provides additional processing capabilities for the architecture 10. For example, the cloud processing device can perform more advanced processing using more complex algorithms than the gateway entity 20; it can support the gateway entity 20 by pre-calculating information that simplifies the calculations performed by the gateway entity 20, for example, by generating an interaction dictionary that provides a limited number of options that the gateway entity 20 can select from when engaged in automated voice interactions with a user; it can perform advanced data analytics based on the data of one or more sensors; and/or it may record data for future reference or to comply with applicable regulations.

In order to use these additional processing capabilities, the double balancing process 76 sends, as output, unprocessed data and/or processed data to be processed in the cloud processing device 28. The unprocessed and/or processed data can originate from a single sensor 14, or data from multiple sensors can be considered holistically.

The cloud processing device 28 processes the received data and makes a determination, for example, to change the security or monitoring status of the architecture 10, based on the data. Accordingly, the cloud processing device 28 may transmit, as input to the double balancing process 76, a status update describing how to change the state of the architecture 10 and/or a trigger for a sensor 14 or output device 18. The cloud processing device 28 can also transmit "null" status update messages, indicating that the security or monitoring status of the architecture 10 need not be changed in response to the data. In addition, the cloud processing device 28 initiates calls to monitoring stations and first responders. For example, the cloud processing device 28 may send a signal to the third-party service 30 indicating the presence of an event such as a fire or break-in, and may direct the third-party service to call 911. Alternatively, the cloud processing device 28 can initiate a call to 911 and then hand the call over to the third-party service 30.

As noted above, an advantage of the architecture 10 is that updates to the processing logic used to analyze the sensor data can be deployed on the gateway entity 20, without the need to push the new updates all the way to the sensors.
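The status updates exchanged with the third-party service and the cloud processing device, including the "null" updates mentioned above, might be represented along the following lines. The field names and the tuple used for GPS coordinates are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StatusUpdate:
    """Illustrative status update message within the double balancing process 76."""
    new_state: Optional[str]                   # e.g. "alarm"; None denotes a "null" update
    source_sensor: Optional[str] = None        # sensor 14 presenting the condition, if any
    gps: Optional[Tuple[float, float]] = None  # first responder information, if relevant

def apply_status_update(current_state: str, update: StatusUpdate) -> str:
    """A "null" update leaves the security or monitoring state unchanged."""
    return update.new_state if update.new_state is not None else current_state

# Example usage
assert apply_status_update("monitor", StatusUpdate(new_state=None)) == "monitor"
assert apply_status_update("monitor", StatusUpdate(new_state="alarm")) == "alarm"
```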
In some embodiments, the cloud processing device 28 determines how the configuration of the gateway entity 20 should be updated. For example, if the gateway entity 20 processes data and decides not to trigger an alarm, but the cloud processing device 28 determines that an alarm should have been triggered, the cloud processing device 28 can automatically send a configuration update to the gateway entity 20 to decrease the detection thresholds of the gateway entity. Alternatively, if the cloud processing device 28 determines that an alarm should not have been triggered by the gateway entity 20, but was triggered, the cloud processing device 28 may automatically send a configuration update to the gateway entity 20 to raise the threshold of the gateway entity. In another example, the cloud processing device 28 may determine that the configuration of a gateway entity is outdated and that a more up-to-date configuration exists in another nearby device. The cloud processing device 28 may send a configuration update to the outdated gateway entity based on the updated device's configuration.

Accordingly, the cloud processing device 28 transmits, as input to the double balancing process 76, a configuration update to be applied at the gateway entity 20. Configuration updates include messages describing a change in the configuration 58 of the gateway entity 20. For example, configuration updates may update the rules 60, filters 62, processing logic 64, and/or configuration parameters 74 of the affected gateway entity 20.

The gateway entity 20 functions as a central hub or facilitator between the sensing/output devices and the external devices accessible through the network 22. Among other functions, the gateway entity 20: allocates resources to particular sensors or groups of sensors in response to event notifications; processes sensor data in the architecture 10; forwards unprocessed data to other devices that are better suited for processing the data; transmits status updates to the third-party service 30; triggers output devices; sends remote access requests to sensors; provides access to recorded video/audio data to the third-party service 30; and applies configuration updates from the double balancing process 76. In some embodiments, the gateway entity 20 also generates voice announcements and audio feedback using text-to-speech and natural language processing based on the interaction dictionary of the cloud processing device 28. The gateway entity 20 may expose one or more Application Program Interfaces (APIs) to the other devices of the architecture 10 for these purposes.

The double balancing process 76 accepts the inputs of the various devices as shown in FIG. 5, and processes the inputs to generate outputs. As part of the double balancing process 76, a number of different data structures can be used. Exemplary data structures suitable for use with the embodiments are described below with reference to FIG. 6-10B.
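As a rough illustration of the closed-loop adjustment described at the start of this passage, the following sketch shows how the cloud processing device might decide to issue a configuration update that lowers or raises the gateway entity's detection threshold. The step size and dictionary layout are assumptions made for the example.

```python
def review_gateway_decision(gateway_alarm: bool, cloud_alarm: bool,
                            current_threshold: float, step: float = 0.05):
    """Return a configuration update payload for the gateway entity, or None.

    If the gateway missed an alarm that the cloud's more sophisticated analysis
    detected, propose a lower detection threshold; if the gateway raised an
    alarm that the cloud considers a false positive, propose a higher one.
    """
    if cloud_alarm and not gateway_alarm:
        return {"configuration_parameters": {"alarm_threshold": current_threshold - step}}
    if gateway_alarm and not cloud_alarm:
        return {"configuration_parameters": {"alarm_threshold": current_threshold + step}}
    return None  # the two decisions agree: no configuration update is needed
```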
FIG. 6 shows an exemplary configuration update 78 that is used to update the configuration 58 of a gateway entity 20. The configuration update 78 includes a header 80 that identifies, among other things, the destination of the configuration update 78. In some embodiments, the header 80 identifies specific devices on which the configuration update 78 should be deployed. Alternatively or additionally, the header 80 may identify a group or class of devices on which the configuration update 78 should be deployed, for example, gateway entities that monitor at least one smoke detector.

In some embodiments, the header 80 also includes other information, such as a timestamp, a priority, and a checksum. The timestamp identifies the time when the configuration update 78 was sent, which can be used to order configuration updates that arrive in succession. In some cases, two configuration updates may conflict with each other, thus requiring one configuration update to override the other. The timestamp can be used to determine which configuration update was sent first, under the assumption that the later configuration update was intended to override the earlier one. If a first configuration update was transmitted before a second configuration update, then in some embodiments the later, second configuration update is applied and the first configuration update is canceled, regardless of the order in which the configuration updates are received at the device to be configured.

In some embodiments, a priority value is used to determine which configuration update should override other configuration updates. For example, if a first configuration update that has a high priority is received and applied to a configured device, the configured device may decide not to apply a subsequent conflicting configuration update that has a lower priority.

A checksum is used in the header 80 to verify that the configuration update 78 was received correctly and was not garbled in transmission. The checksum is applied at the transmitting device by calculating a checksum value over the payload of the configuration update 78, using any of a number of well-known checksum algorithms. The calculated checksum is added to the header 80. Upon receipt of the configuration update 78, a checksum value is calculated over the payload of the configuration update 78, and is compared with the checksum in the header 80. If the two checksums match, then the configuration update 78 is determined to have been received successfully. If the two checksums do not match, then the receiving device determines that an error occurred in transmission or reception, and requests that the configuration update 78 be retransmitted.

The different elements in the configuration update 78 may be separated by a designated character such as an end-of-line character, a comma, or any other suitable character. When the configuration update 78 is parsed by the receiving device, the receiving device can separate the different elements based on the designated characters, and can modify the corresponding elements of the configuration 58 of the configured device. Alternatively or additionally, the different elements of the configuration update 78 may be provided at predefined locations in the configuration update, or may have a predefined size, or may have a variable size that is reported in the header 80. Upon receipt of the configuration update 78, the receiving device may separate the elements of the configuration update based on their position in the message and/or their size.

Although the configuration update 78 is shown with rules 60, filters 62, processing logic 64, and updated configuration parameters 74, some of these elements can be omitted from the configuration update 78. For example, if only the rules 60, or a part of a rule 60, are updated in a given configuration update 78, then the remaining elements are omitted from the configuration update 78. The header 80 indicates which elements are updated in a given configuration update 78.
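A minimal sketch of assembling and verifying such a configuration update follows, using CRC-32 merely as an example of a well-known checksum algorithm; the message layout, field names, and JSON encoding are illustrative assumptions.

```python
import json
import time
import zlib

def build_configuration_update(payload: dict, destination: str, priority: int = 0) -> dict:
    """Assemble a configuration update 78 with a header 80 at the transmitting device."""
    body = json.dumps(payload).encode()
    return {
        "header": {
            "destination": destination,   # a device, group, or class of devices
            "timestamp": time.time(),
            "priority": priority,
            "checksum": zlib.crc32(body),
        },
        "payload": body,
    }

def verify_configuration_update(update: dict) -> bool:
    """Receiver recomputes the checksum over the payload and compares it to the header."""
    return zlib.crc32(update["payload"]) == update["header"]["checksum"]

# Example usage
update = build_configuration_update(
    {"configuration_parameters": {"alarm_threshold": 0.6}},
    destination="gateway_entity_20",
)
assert verify_configuration_update(update)
```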
An example of a rule 60 suitable for use in a configuration 74 or configuration update 78 is shown in FIG. 7. The rule 60 attempts to match a set of conditions 82, for example, a first condition 84, a second condition 86, etc., as defined in the rule 60, against the conditions in the architecture 10. When the set of conditions 82 is met, one or more actions 88 are triggered.

A condition is a predefined set of states, statuses, or parameter values that a device tries to match against states, statuses, or parameters in the architecture 10. Examples of conditions 82 include matching a state of the architecture or of a device against a predefined value or range of values, for example, the current security level is 1, 2 or 3; the smoke detector is in an "alarm" mode. Multiple states can be matched in a single condition, for example, two smoke detectors separated from each other by more than a predefined distance are both in an "alarm" mode, or a glass break sensor is triggered and a motion detector detects movement in the room. One or more of the conditions 82 may be time based, for example, the current time is 10:30 AM; the current time is between 10:00 PM and 6:00 AM.

The set of conditions 82 may be an empty set, that is, without conditions, in which case the action 88 is carried out immediately upon receipt of the rule 60, and the rule is subsequently discarded. Alternatively, custom logic can be applied to define how to carry out rules that have no associated conditions 82. Some or all of the conditions 82 can be specified using logical operators such as AND, OR, XOR, NOT, etc. For example, the rule 60 may specify that the first condition 84 and the second condition 86 must both be met for the action 88 to be triggered. Alternatively, the rule 60 could specify that either the first condition 84 or the second condition 86 must be met to trigger the action 88.

When the set of conditions 82 matches a current state of the architecture 10 or the device(s), the action 88 specified in the rule is carried out. An action 88 is a set of one or more instructions or tasks to be carried out by the device on which the rule 60 is triggered. Examples of actions 88 include performing a task locally, for example, triggering an integrated notification device; processing additional data; and forwarding instructions to other devices, for example, sending a status update to the gateway entity 20, escalating the security level of the architecture 10, or triggering the dishwasher to start running.

A rule 60 can specify the number of times the rule is to be carried out. This can be done, for example, by specifying a maximum number of applications as one of the conditions 84, and tracking the number of times the rule 60 has caused the action 88 to be triggered. Upon reaching the maximum number of applications, the rule 60 is discarded.
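The condition/action structure of the rule 60 can be sketched roughly as follows; the callable-based representation, the state dictionary, and the handling of the application count as an internal counter are assumptions chosen for brevity.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    """Illustrative rule 60: all conditions 82 must match for the action 88 to fire."""
    conditions: list                 # each entry is a Callable[[dict], bool]
    action: Callable[[dict], None]
    max_applications: Optional[int] = None
    applied: int = 0

    def evaluate(self, state: dict) -> bool:
        if self.max_applications is not None and self.applied >= self.max_applications:
            return False  # rule exhausted; the caller may discard it
        # An empty condition set fires immediately, as described above.
        if all(condition(state) for condition in self.conditions):
            self.action(state)
            self.applied += 1
            return True
        return False

# Example: both smoke detectors in "alarm" mode AND the security level is 1, 2, or 3.
rule = Rule(
    conditions=[
        lambda s: s.get("smoke_1") == "alarm" and s.get("smoke_2") == "alarm",
        lambda s: s.get("security_level") in (1, 2, 3),
    ],
    action=lambda s: print("escalate security level"),
)
rule.evaluate({"smoke_1": "alarm", "smoke_2": "alarm", "security_level": 2})
```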
The evaluation logic 92 accepts input data and/or contextual information about the data, for example, the type of sensor(s) that generated the data, the location in which the sensor(s) were deployed, any initial processing that has been performed on the data, etc., and evaluates the data to determine whether the data should be processed locally. The exemplary evaluation logic 92 evaluates the input data and/or the contextual information against one or more thresholds 94 to determine whether the data should be processed locally. A threshold 94 represents a magnitude or intensity that must be met or exceeded for a given result to occur. In the example of the evaluation logic, thresholds 94 represent dividing lines that cause certain predefined actions to be performed depending on whether a measured parameter falls on one side or the other of the threshold 94.

In the exemplary processing determination filter 90, the data is compared against a complexity threshold 96. A complexity threshold 96 represents a maximum complexity that the local device is able to tolerate in data while still being able to process the data efficiently. In the exemplary embodiment, evaluation logic 92 analyzes the data and the contextual information about the data, and assigns a complexity score to the data. The complexity score can be calculated considering the type of sensor from which the data originated, the amount of data, whether the data values are stable or variable, whether the data is clean or noisy, whether the data includes any immediately recognizable pattern, etc. If the complexity score meets or exceeds the complexity threshold 96, then the evaluation logic 92 determines that the data is too complex for processing on the local device. If the complexity score is below the complexity threshold 96, then the evaluation logic 92 determines that the local device is able to process the data.

The evaluation logic 92 also uses a load threshold 98 to perform load balancing. Load balancing refers to the distribution of tasks, jobs, or other work among multiple computing resources. In the exemplary embodiment, the evaluation logic 92 compares a load on the local processor(s), for example, a percentage of local processing resources that are currently in use, a number and/or complexity of jobs that are currently being processed, etc., with the load threshold 98. If the current load meets or exceeds the load threshold 98, then the evaluation logic 92 may determine that the processing task under consideration should be processed elsewhere. If the current load is below the load threshold 98, then the evaluation logic 92 may determine that the processing task should be performed locally.

The evaluation logic 92 can be programmed with a list of accessible devices that have computing resources available for use, for example, the gateway entity 20, and an indication of the types of processing tasks in which those devices specialize. If the evaluation logic 92 determines that a processing task should be forwarded to another device in the architecture, the evaluation logic 92 may consult the list to select a suitable target device. The devices in the list can be associated with a priority that indicates the order in which processing tasks should be sent to the listed devices. For example, among devices specialized in a particular type of data, for example, smoke detector data, the devices can be ranked in order of priority.
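Purely as a hedged sketch of the two checks just described (the score and load are assumed to be normalized values; nothing here is prescribed by this description):

def should_process_locally(complexity_score, current_load, complexity_threshold, load_threshold):
    """Return True if the local device should process the data itself."""
    if complexity_score >= complexity_threshold:
        return False   # data too complex for the local device
    if current_load >= load_threshold:
        return False   # local processor already too busy; balance the load elsewhere
    return True


should_process_locally(0.4, 0.9, complexity_threshold=0.7, load_threshold=0.8)   # returns False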
The next processing task received for that particular type of data can be sent to the highest priority device in the list. A query can be sent to the highest priority device to determine whether the highest priority device is capable of performing a new processing task. If the highest priority device responds by acknowledging its willingness to perform the task, the data can be sent to the highest priority device for processing. If the highest priority device responds by rejecting the processing request, the local device can move on to the next highest priority device in the list. This process can be repeated until a suitable device is selected.

The devices in the list can exchange messages, for example, through the gateway entity 20, to change their priority ranking on other devices. For example, if a given number of processing tasks is assigned to a given device and its processing load comes within a predefined tolerance of that device's load threshold 98, the overloaded device can send a message to the gateway entity 20 requesting that the priority of the overloaded device be lowered in the evaluation logic 92 of the other devices of the architecture 10. Other devices are therefore less likely to send processing tasks to the overloaded device. When the processing load of the overloaded device drops to a predefined level, or after a predetermined amount of time, the priority of the device can be raised. A local device may also change the priority of a remote device in the evaluation logic 92 of the local device as the local device assigns tasks to the remote device. For example, if a gateway entity 20 sends a processing job to a first sensor 14, the gateway entity 20 may decrease the priority of the first sensor so that the next task is sent to a second sensor 16. In this way, the gateway entity 20 can distribute tasks more evenly. The list may also include a default device, located at the next highest level of hierarchy 32 relative to the local device currently preparing to reallocate the processing task, to which tasks can be forwarded if no other device is identified. For example, the default device at the intermediate level 36 of hierarchy 32 may be the gateway 20, and the default device at the upper level 38 of hierarchy 32 may be the cloud or third-party processing device 30.

In addition to determining whether the data should be processed locally or remotely, the processing determination filter 90 also applies a set of notification rules 100 to any data received to determine whether the data should be recorded in a local memory, forwarded to other devices specified in architecture 10, or processed and discarded. The set of notification rules 100 matches conditions 82, such as a type of data, a time interval in which data should be recorded, recognized patterns in the data, etc., against the input data, potentially after the data is processed by the evaluation logic 92. If the conditions 82 match the data, the notification rule set 100 applies an action 88, such as storing the data in a local memory, for example, the memory 52 of the sensor 14 or the memory 52 of the gateway entity 20, or forwarding the data to a device specified in action 88.
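Purely as an illustrative sketch of the priority-ordered device selection described above (the class, its fields, and the query mechanism are assumptions, not part of this description), the fallback behavior might look like:

class Device:
    """Stand-in for an entry in the evaluation logic's list of accessible devices."""
    def __init__(self, name, accepts=True):
        self.name = name
        self.accepts = accepts

    def query_accepts_task(self):
        # In the architecture this would be a query message sent over the network.
        return self.accepts


def select_target_device(candidates, default_device):
    # Candidates are assumed to be sorted by descending priority for this data type.
    for device in candidates:
        if device.query_accepts_task():
            return device
    # No listed device accepted the task: fall back to the default device at the
    # next higher level of hierarchy 32 (e.g., the gateway entity or the cloud).
    return default_device


target = select_target_device([Device("sensor-16", accepts=False)], Device("gateway-20"))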
If the processing determination filter 90 determines that the data should be processed locally, the data is processed according to the processing logic 72 of the local device. After processing by the processing logic 72, the device applies a scaling filter 102, as shown in FIG. 8B, to determine whether the data should also be escalated to another device for further processing. The scaling filter 102 is applied if the processing logic 72 decides to take any action, decides to take a specific action such as raising an alarm, decides not to take any action, or any combination of these possibilities. The scaling filter 102 has an evaluation logic 104 that determines whether the processed data should be escalated for further processing by another device. The evaluation logic 104 decides to escalate the data for further processing if the processing logic 72 is unable to process the data. For example, if the data is voice data that includes commands, and the processing logic 72 is unable to identify the commands in the voice data with a high degree of confidence, the evaluation logic 104 may escalate the data for further processing at a higher level of hierarchy 32.

The evaluation logic 104 consults a threshold 106, such as an escalation threshold 108, in order to determine whether the data should be escalated. In an exemplary embodiment, the escalation threshold 108 is applied when the processing logic 72 determines not to take an action but was within a predefined tolerance of taking the action, suggesting that the determination may be a false negative. Alternatively or additionally, the escalation threshold 108 is applied when the processing logic 64 determines to take an action but was within a predefined tolerance of not taking the action, suggesting that the result of the determination may be a false positive. The escalation threshold 108 is a value or range of values that defines these tolerances. For example, the processing logic 64 may trigger an alarm on an output device 18 if the sensor data value v of a sensor rises above a predefined alarm threshold a. The escalation threshold can be set to a value e. If the sensor data value v rises above a, the processing logic 64 will trigger the alarm. If the sensor data value v is at or below the value a - e, then the processing logic 72 will determine that no alarm should be triggered, and the scaling filter 102 will not escalate the data for further processing by another device. If the sensor data value is in the range a - e < v < a, then the processing logic 64 will not trigger the alarm, but the scaling filter 102 will forward the data to another device for further processing.

The escalation threshold 108 is modified by the security level modifiers 110. The security level modifiers 110 represent a value or values used to raise or lower the escalation threshold 108, depending on the current security level or state of the architecture 10, or of one or more zones 12 in the architecture 10. As the security level or state changes, the security level modifiers 110 modify the escalation threshold 108 to make the evaluation logic 104 more or less prone to escalate the data. For example, if the security level is raised, the evaluation logic 104 can be made more likely to escalate the data for further processing. If the security level is relatively low, the evaluation logic 104 may be made less likely to escalate the data. In a further embodiment, the evaluation logic 104 applies pattern recognition and escalates the data if a particular pattern is identified in the data, regardless of whether the processing logic 64 decided to take an action in response to the data.
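A minimal sketch of the tolerance band just described follows; the variable names v, a, and e follow the text, while the returned labels are assumptions about how a device might encode the outcomes:

def evaluate_reading(v, a, e):
    """Apply the alarm threshold a and the escalation tolerance e to a sensor value v."""
    if v > a:
        return "trigger alarm"            # processing logic fires the alarm locally
    if a - e < v:
        return "escalate for processing"  # possible false negative: forward the data
    return "no action"                    # at or below a - e: keep the data local


evaluate_reading(v=0.97, a=1.0, e=0.05)   # returns "escalate for processing"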
The evaluation logic 104 of the scaling filter 102 selects a device to which the data should be forwarded in a manner similar to the way in which the evaluation logic 92 of the processing determination filter 90 selects a device to which the data should be forwarded. The criteria used by the evaluation logic 104 may also be different from the criteria used by the evaluation logic 92. The scaling filter 102 is applied following the processing of the data by the processing logic 64. An example of the processing logic 64 is shown in FIG. 9.

The processing logic 64 includes an evaluation logic 112. The evaluation logic 112 accepts input data, such as data from a detector 40 of a sensor 14, or data aggregated from multiple sensors, and processes the data to transform the data into new output data, modify existing data, or perform an action. The processed data is compared with a threshold 116, such as a trigger threshold 118. The trigger threshold 118 defines a value that, if the input data rises above it or falls below it, causes an action to be performed. The evaluation logic 112 also applies pattern matching to the data to determine whether to take action. If the evaluation logic 112 indicates that the data exceeds the threshold, or that the data matches a predetermined pattern, the evaluation logic 112 may determine that an event is currently in progress or that an event is about to occur. In this case, the evaluation logic 112 may generate an event notification to be forwarded to the gateway entity.

The input data and/or the processed data are also compared with a set of trigger rules 114. The set of trigger rules 114 defines the rules 60 whose conditions 82 relate to the data being processed. For example, a rule of the trigger rule set 114 may indicate that, if the data includes a pattern indicative of a person returning home, an output device 18 such as a light should be turned on. Another rule of the trigger rule set 114 may relate to sending a status update or notification to another device, such as a user's mobile device, the cloud processing device 28, or the third-party service 30. The rules of the trigger rule set 114 may be location dependent, for example, by including location information as one of the conditions 82. For example, if the rule is a rule that is triggered by a fire alarm and triggers an action 88 to turn on a sprinkler system, one of the conditions of the rule may be that the sprinklers should not be triggered until absolutely necessary if the output device (the sprinkler) is in a computer lab or in a server room.

Turning now to the configuration parameters 66, 74, exemplary parameters for the gateway entity 20 and the sensor 14 are shown in FIG. 10A and 10B, respectively. FIG. 10A represents the configuration parameters 74 for deployment on a gateway entity 20. The configuration parameters 74 specify a list of connected devices 120. The list of connected devices 120 includes an identifier for the devices that are, or should be, communicatively coupled to the gateway entity 20, as well as an indication of the type of device. The identifier can be a device address, for example, an IPv6 address.
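By way of a short, purely illustrative sketch of the trigger-threshold and pattern checks performed by the evaluation logic described above (the notification format and function names are assumptions):

def process_reading(value, trigger_threshold, patterns):
    """Return an event notification if the reading crosses the trigger threshold
    or matches a known pattern; otherwise return None."""
    if value > trigger_threshold or any(pattern(value) for pattern in patterns):
        # The notification tells the gateway entity that data for a live event will follow.
        return {"type": "event_notification", "value": value}
    return None


process_reading(0.9, trigger_threshold=0.7, patterns=[lambda v: v < 0.0])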
The list of connected devices 120 includes devices that the gateway entity 20 is responsible for monitoring, for example, the sensors 14, 16 and the output device 18 of the monitored zone 12, as well as other devices with which the gateway entity 20 is capable of communicating, for example, the cloud processing device 28 and the third-party service 30.

The configuration parameters 74 include a list of device conditions 122 that represents the status of the devices in the list of connected devices 120. The status of the devices reflects any one, or a combination, of a communication status, for example, communicatively connected to the gateway entity 20 and/or the network 22; a device maintenance status, for example, the battery level of the device, whether the device is scheduled for maintenance, whether the device is reporting abnormal data, etc.; a device configuration status, for example, a list of the configuration ID(s) 138 for the device(s); and other states. The list of device conditions 122 includes the condition 124 of the gateway entity itself, as well as a condition 126 for the sensors and a condition 126 for the output devices monitored by the gateway entity 20. Status conditions can be reported by the devices in response to a query from the gateway entity 20 or at regular intervals, or they can be updated by the gateway entity 20, for example, in response to not receiving an expected response or update from the device.

The configuration parameters 74 include expected value ranges 130 for the configured device. The expected value ranges represent a range of values for one or more operating parameters or characteristics of the configured device that indicate normal operation of the device. If the device generates an operational parameter or has a characteristic outside the expected value ranges 130, this may indicate a malfunction of the configured device that requires maintenance. The configuration parameters may, therefore, include a set of maintenance rules 132 with rules 60 to be applied when one or more operational parameters or characteristics fall outside the expected value ranges 130. The set of maintenance rules 132 may specify actions, such as reporting a malfunction to the third-party service or the user, or performing maintenance operations such as restarting the device, using alternative hardware or software, if available, or restoring the device to a last known good configuration.

The configuration parameters 74 also include a set of security rules 134 that includes rules 60 specifying actions 88 to be taken in the event that an alarm condition is raised or the security level 136 of architecture 10 is changed. Security level 136 represents a surveillance level or a monitoring status of architecture 10 or a part of architecture 10. Security level 136 can be specified as a quantitative value, for example, level 1, level 2, etc., or it can be specified as a set of "modes". Examples of "modes" are shown in Table 1 below:

Safe: The monitored area(s) are safe. The sensors are enabled and working properly. No occupants are present in the monitored area(s), except possibly pets.
Watchman: The monitored area(s) are relatively safe. Occupants may be present.
At Risk: Transition state indicating that the monitored area(s) are not safe. An occupant may be trying to authenticate.
Intruders: Something is wrong in the monitored area(s). It could indicate the presence of intruders or vandalism.
Emergency: A life-threatening condition, such as a fire or gas leak.
Problem: A significant problem has been detected, such as a flood, power failure, or inoperable device.
Discomfort: A minor problem has been detected.
TABLE 1

The set of security rules 134 includes default actions to be taken as long as security level 136 is in a particular state. For example, if security level 136 is set to the "emergency" mode, the set of security rules 134 can cause data requests to be repeatedly sent to a relevant sensor.

The configuration parameters 74 deployed on the device can be customized to the device, to the location where the device is deployed, and/or based on other considerations. In order to identify which configuration is present on which device, which can be used, for example, to determine whether a particular device is well suited for processing certain types of data, the configuration parameters 74 can be associated with one or more configuration ID(s) 138. The configuration ID(s) 138 may, for example, be a checksum, an identification string, or a series of flags that uniquely identify part or all of a set of configuration parameters 74.

The configuration parameters 74 also include default configuration settings 140. The default configuration settings 140 are settings for some or all of the configuration parameters 74 that apply under certain conditions, such as when the device is booted or restarted, or when a configuration parameter 74 is corrupted or otherwise becomes unusable. As configuration updates 78 are received, the default configuration settings 140 may optionally be updated with the new configuration settings contained in the update 78.

As shown in FIG. 10B, the configuration parameters 66 for deployment on a sensor 14 are similar to the gateway entity configuration parameters 74. Because the sensor 14 is typically not responsible for monitoring other devices in the architecture 10, some of the elements of the gateway entity configuration parameters 74 can be omitted from the sensor configuration parameters 66.

Rules 60, filters 62, processing logic 64, and configuration parameters 66, 74 are applied by devices in the architecture to process input data from one or more sensors 14. The methods performed by the devices in the architecture 10 will be described below with reference to FIG. 11-15. FIG. 11 is a data flow diagram showing a data flow through architecture 10. For clarity of discussion, FIG. 11 focuses primarily on the data processing and configuration update aspects of the double balancing process 76 described above. Some of the other processes described above are omitted from the data flow diagram for clarity.

Initially, the primary sensor 14 generates sensor data and performs a filtration step 142 to determine whether to process the sensor data locally, on the primary sensor 14, or forward the sensor data to the gateway entity 20. In the filtering process, the input data is retrieved by the processor 42 from the sensor data temporary store 54. The data can optionally be aggregated with other data. The processor 42 then applies the processing determination filter 90 to the data. Based on the logic and thresholds of the processing determination filter 90, the processor 42 determines whether the filter(s) indicate that the data should be recorded locally. If so, the data is stored in the memory of the local device. The data can be stored for a predetermined amount of time, or until the data is deleted.
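A simplified, non-authoritative sketch of the sensor-side filtration step just described (the decision flags, the store, and the forwarding callback are assumptions made for illustration):

def filtration_step(data, record_locally, process_locally, local_store, forward):
    """Apply the record/process decisions of the processing determination filter 90."""
    if record_locally:
        local_store.append(data)          # keep a copy in local memory for a limited time
    if process_locally:
        return ("process_locally", data)  # caller then applies the local processing logic
    forward(data)                         # otherwise hand the data to the gateway entity 20
    return ("forwarded", data)


log = []
filtration_step([0.2, 0.3], record_locally=True, process_locally=False,
                local_store=log, forward=lambda batch: None)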
Alternatively or additionally, the filter(s) may indicate that the data should be recorded, but on a remote device. In that case, the data can be forwarded to the remote device for recording. After either the data is recorded, or a determination is made that the data does not need to be recorded, the processor 42 determines whether the filter(s) indicate that the data should be processed locally. If not, then the data is forwarded to the next destination indicated by the evaluation logic 92 (for example, the gateway entity 20). If, on the other hand, the processor 42 determines that the data should be processed locally, then the processor loads the processing logic 64 and processes the data in the local processing step 144.

In the local processing step 144, there are several possible outcomes. One possible outcome is that the processed data does not trigger any action. If the processed data does not trigger an action and the scaling filter 102 does not indicate that the data should be escalated for further processing, no action is taken and the data flow begins again using the new data generated by the primary sensor 14. If the scaling filter 102 indicates that the data should be escalated for further processing, then the sensor data is forwarded to the gateway entity 20. Another possible outcome is that the local processing 144 triggers a follow-up action, such as an event notification or an action performed by an output device. In these situations, the local processing step 144 generates an event notification and forwards it to the gateway entity 20, and/or generates a trigger and sends it to a primary output device 18.

If the local processing 144 performed on the sensor 14 indicates that an event is occurring or about to occur, the sensor 14 generates an event notification and sends the event notification to the gateway entity 20. The event notification can be sent ahead of the unprocessed or processed data from the sensor 14. Upon receiving the event notification, the balancer 68 of the gateway entity 20 allocates resources, for example, processing resources or memory resources, to the sensor 14 and prepares to receive data from the sensor 14, for example, by creating or updating one or more critical timing paths 24 to the sensor 14. The sensor 14 then forwards the sensor data directly to the gateway entity 20 for evaluation. Accordingly, the sensor 14 can be programmed with relatively simple processing logic that identifies when events are occurring or about to occur, but does not necessarily have the complexity to fully process the data during an event. The more complex processing logic 64 is deployed at the gateway entity 20, which processes the data when the sensor 14 makes the initial determination that the data suggests the occurrence of an event.

If the local processing step 144 causes a status update to be sent to the gateway entity 20, the gateway entity 20 processes the status change, for example, changing the security level 136 and applying the applicable rules from the set of security rules 134. This may involve triggering one or more output devices 18. If the filtration step 142 or the local processing step 144 performed by the primary sensor 14 causes the sensor data to be sent to the gateway entity 20 for further processing, the gateway entity 20 may optionally apply a filtration step 142 to determine whether the gateway entity 20 should process the sensor data locally, or through a secondary sensor that is reachable by the gateway entity 20.
If so, the gateway entity performs a local processing step 144 on the sensor data by applying the processing logic 64 of the gateway entity to the data. In the local processing step 144 performed by the gateway entity 20, there are several possible results. One possible result is that the processed data does not trigger any action. If the processed data does not trigger an action and the scaling filter 102 does not indicate that the data should be escalated for further processing, no action is taken and the data flow can be started again using the new data generated by the primary sensor 14. If the scaling filter 102 indicates that the data should be escalated for further processing, then the sensor data is forwarded to the cloud processing device 28. Another possible result is that the local processing 144 triggers a follow-up action, such as a change of state or an action performed by an output device. In these situations, the local processing step 144 generates a status update and forwards it to the third-party service 30, changes the security level 136 at the gateway entity 20 (if necessary), and triggers any applicable rule from the set of security rules 134. For example, the local processing step generates a trigger and forwards it to a primary output device 18.

Yet another possible result is that the gateway entity 20 determines, either initially or as the data is processed, that the data should be forwarded to a secondary sensor 16 that is well suited for processing the sensor data. For example, the secondary sensor 16 can be deployed with a specialized configuration 58 that is particularly well suited for processing the type of data received from the sensor 14. Accordingly, the local processing step 144 of the gateway entity 20 may forward the sensor data to the secondary sensor 16 for processing, and may receive a status update in response. Alternatively or additionally, the local processing step 144 may determine that supplementary data is needed in order to process the sensor data. If the secondary sensor 16 has already stored data in the sensor data temporary store 70 of the gateway entity 20, the gateway entity retrieves the secondary sensor's data from its memory 52. Alternatively, the local processing step 144 may send a request to the secondary sensor 16, and receive sensor data from the secondary sensor 16 in response.

The filtering step 142 and/or the local processing step 144 performed by the gateway entity 20 can cause the sensor data to be forwarded to the cloud processor 28 for further processing. The cloud processor 28 applies a local filtration step 142 (not shown) and a processing step 144 to the data. Similar to the local processing step 144 performed by the gateway entity 20, the cloud processor 28 may determine that additional data is needed from a secondary sensor 16. If the local processing step 144 performed by the cloud or third-party processor 30 generates a status update and/or any trigger for the output device 18, the status update and the trigger(s) are sent to the gateway entity 20 and/or the third-party service 30 to be acted upon accordingly. Configuration updates can also be sent to the gateway entity 20 as an output of the local processing step 144 performed by the cloud processing device 28. Configuration updates may change the configuration settings of the gateway entity 20.
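The possible results just listed could be dispatched roughly as follows; this is only a sketch, and the result labels and callback names are assumptions rather than anything defined by the architecture:

def handle_gateway_result(result, escalate, forward_to_cloud, trigger_output, delegate_to_secondary):
    """Illustrative dispatch of the outcomes of the gateway's local processing step 144."""
    if result == "no_action":
        if escalate:                    # the scaling filter 102 flagged a borderline case
            forward_to_cloud()
        return
    if result == "follow_up_action":
        trigger_output()                # e.g., fire a primary output device 18
    elif result == "delegate":
        delegate_to_secondary()         # a specialized secondary sensor 16 takes over


handle_gateway_result("no_action", escalate=True,
                      forward_to_cloud=lambda: print("forward to cloud processor 28"),
                      trigger_output=lambda: None,
                      delegate_to_secondary=lambda: None)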
It is noted that, although FIG. 11 shows the sensor 14 forwarding data to the gateway entity 20 for analysis, the sensor 14 can also forward data directly to the cloud or third-party processing device 30.

FIG. 12 represents an exemplary operation procedure 146 suitable for use by the sensor 14, and by any other sensor in architecture 10. The procedure begins in step 148, where the sensor is initialized. This may involve, for example, performing system startup checks, loading the default configuration settings 140 from memory, setting any relevant parameters in configuration 58 based on the default configuration settings 140, initializing the temporary stores 54, 56, establishing communication with the gateway entity 20 through the communication interface 46, and applying relevant maintenance rules from the set of maintenance rules 132.

The processing then proceeds to step 150, where the sensor 14 checks the network temporary store 56 for new messages. If the sensor 14 determines, in step 152, that the network temporary store 56 includes a new trigger or remote access request, for example, from the gateway entity 20, then the processing proceeds to step 154 and the trigger message or request is processed. In the case of a trigger, the sensor 14 can parse the trigger message to recover an action that the sensor 14 is requested to take, such as triggering an output device 18 accessible to the sensor 14. In the case of a remote access request, the sensor 14 parses the request to identify an entity that is requesting access to the sensor capabilities. The sensor 14 evaluates or authenticates the entity to determine whether the entity is authorized to access the requested capabilities. If so, the sensor opens or connects to a communication channel with the requesting entity, and executes authorized commands from the requesting entity that allow the requesting entity to control the sensor capabilities that were the subject of the request. For example, the third-party service 30 may submit a request for remote access to a video camera, and may issue commands to control the positioning capabilities of the video camera, for example, pan, tilt, and rotate. Alternatively, the third-party service 30 could request access to audio and/or video data from the camera, and could be provided with access to the temporary data store 54 of the camera. The processing then returns to step 150, where the network temporary store 56 is checked for additional triggers or remote access requests.

If the determination in step 152 is "NO", that is, no new triggers or remote access requests are present in the network temporary store 56, the processing proceeds to step 156 and the next batch of data is retrieved from the temporary data store 54. The processing then proceeds to step 158, where the sensor 14 determines whether an event is already in progress. For example, if the preprocessing performed by the sensor caused the sensor to send an event notification to the gateway entity 20, then the sensor 14 may set a "current event" flag at a designated location in the sensor memory 52. When the sensor 14 or the gateway entity 20 determines that the event is over, the sensor 14 may reset the "event in progress" flag. In step 158, the sensor 14 can check the current event flag to assess whether an event is currently in progress. If the result in step 158 is "YES", that is, an event is currently in progress, then the sensor 14 proceeds directly to step 166 and forwards the sensor data from the temporary data store 54 directly to the gateway entity for processing.
If the result in step 158 is "NO", that is, an event is not currently in progress, then the sensor 14 proceeds to step 160 and performs filtration and/or processing on the data, for example, corresponding to steps 142 and 144 of FIG. 11. Based on the filtration and/or processing, the sensor evaluates, in step 162, whether the newly processed data indicates that an event is currently occurring or is expected to occur. If not, then the processing returns to step 150 and the temporary store is checked again for new messages. If the determination in step 162 is "YES", that is, an event is occurring or is expected to occur, then the processing proceeds to step 164 and the sensor 14 transmits an event notification to the gateway entity 20 to inform the gateway entity 20 that the sensor data will be arriving at the gateway entity in the near future. In response, the gateway entity allocates resources in preparation to receive the data. The processing then proceeds to step 166, in which the next batch of raw data is transmitted to the gateway entity 20.

Some or all of the steps of the operation procedure 146 can be performed in parallel, if the processor 42 of the sensor 14 supports parallel processing. For example, FIG. 12 separates the steps used to process triggers and remote access requests from the steps used to process the sensor data. Triggers and remote access requests are handled on a first thread 168, and the sensor data processing steps are performed on a second thread 170. If the steps of the operation procedure 146 are to be performed in parallel, then the initialization step 148 may include the creation of new threads for the parallel procedure sets.

FIG. 13 represents a corresponding operation procedure 172 suitable for performance by a gateway entity 20. Procedure 172 begins in step 174, when the gateway entity 20 is initialized. This may involve, for example, performing system startup checks, loading the default configuration settings 140 from memory, setting any relevant parameters in configuration 58 based on the default configuration settings 140, initializing the data temporary stores 54, 56, establishing communication with the devices in the list of connected devices 120, and applying relevant maintenance rules from the set of maintenance rules 132.

The processing then goes to step 176, where the network temporary store 56 is checked to determine whether there is any message pending evaluation. Because the gateway entity 20 handles many different types of messages, the messages are classified in steps 178, 184, 188 and 190. Different types of messages are handled in order of priority; for example, messages that have event notifications, which could include an alarm condition, can be processed before messages that have new sensor data for processing.

In step 178, the gateway entity 20 determines whether there is a pending event notification. If so, the processing proceeds to step 180 and the event notification is processed. In step 180, the balancer 68 of the gateway entity allocates resources for the sensor 14 that presented the event notification. The amount of resources allocated may depend on the type of sensor that presented the event notification or the type of event. For example, the data processing of a smoke detector may involve relatively simple checks, such as whether the detector 40 reads a value above a threshold indicating the presence of smoke in the room, and/or checking a nearby thermometer. On the other hand, the data processing of a glass break sensor may involve complex audio processing steps and, therefore, the balancer 68 may allocate more resources in response to an event notification from a glass break sensor than to one from a smoke detector.
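A minimal sketch of the type-dependent allocation just described follows; the sizing table and its units are assumptions invented for illustration, not values taken from this description:

# Hypothetical resource budgets per sensor type, in arbitrary work units.
ALLOCATION_BY_SENSOR_TYPE = {
    "smoke_detector": 1,   # simple threshold and thermometer checks
    "glass_break": 8,      # audio analysis is considerably heavier
}


def allocate_for_event(sensor_type, default_units=2):
    # The balancer 68 would reserve this budget before the raw sensor data arrives.
    return ALLOCATION_BY_SENSOR_TYPE.get(sensor_type, default_units)


allocate_for_event("glass_break")   # returns 8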
The processing then proceeds to step 182, and the security rule set 134 is evaluated and executed. If security level 136 is changed by the set of security rules, the gateway entity 20 can update security level 136. Once the event notification is addressed, the processing then returns to step 176 and the network temporary store 56 is checked for additional messages.

In step 184, the gateway entity 20 determines whether there is a new trigger message pending. If so, the processing proceeds to step 186 and the gateway entity forwards the trigger message to the affected output devices 18. The processing then returns to step 176 and the network temporary store 56 is checked for additional messages.

In step 188, the gateway entity 20 determines whether there is new sensor data to be processed. If so, the processing proceeds to steps 142 and 144 and the filtering and processing methods of the gateway entity are performed. If multiple batches of data are pending processing, the gateway entity 20 can prioritize the data for which an event notification has been received, and can prioritize high priority events over low priority events. After the sensor data is processed, the processing returns to step 176 and the network temporary store 56 is checked for additional messages.

In step 190, the gateway entity 20 determines whether there is any pending configuration message. If so, the processing proceeds to step 192 and the next configuration update 78 is retrieved from the network temporary store 56. In step 194, the retrieved configuration update 78 is parsed to separate the respective elements, for example, rules 60, filters 62, processing logic 64, and configuration parameters 66, of the configuration update 78. For example, if the elements are separated by a designated character, the gateway entity reads the configuration update 78 until the designated character is reached, and associates the data read with the appropriate element of the configuration update 78. Alternatively, header 80 may specify where to find the respective elements of configuration update 78. In step 196, the respective elements are evaluated to determine how to update the configuration 74 of the gateway entity. For example, the gateway entity 20 determines whether an element of the configuration update 78 is a new configuration element or is a new version of an existing configuration element already deployed in the gateway entity 20. If there is no corresponding configuration element, for example, the configuration element is a new rule to be added to the trigger rule set 114, then the configuration element is added to the configuration 74. If a corresponding configuration element exists, for example, the configuration element is a new version of an existing rule in the trigger rule set 114, then the new configuration element overwrites the old configuration element. The processing then returns to step 176 and the network temporary store 56 is checked for additional messages.

Some or all of the steps of the operation procedure 172 may be performed in parallel.
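Sketched very roughly, and only under the assumption that each element is encoded as a name=value pair separated by a designated character (an encoding this description does not actually prescribe), the parse-and-merge behavior of steps 192-196 might look like:

def apply_configuration_update(update_text, configuration, separator="\n"):
    """Split a configuration update into elements and merge them into the configuration."""
    for element in update_text.split(separator):
        if not element.strip():
            continue
        name, _, value = element.partition("=")
        configuration[name] = value   # new elements are added; existing ones are overwritten
    return configuration


config = {"rule_1": "old"}
apply_configuration_update("rule_1=new\nfilter_2=threshold:0.8", config)
# config is now {"rule_1": "new", "filter_2": "threshold:0.8"}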
FIG. 13 represents an exemplary embodiment in which event notifications are processed in a first thread, trigger messages are processed in a second thread, sensor data is processed in a third thread, and configuration updates are processed in a fourth thread. If the steps of the operation procedure 172 are to be performed in parallel, then the initialization step 174 may include the creation of new threads for the parallel procedure sets.

The exemplary procedures described in FIG. 11-13 can be part of the double balancing process 76. These procedures can be supplemented with additional procedures as necessary or applicable. FIG. 14A-14B represent an example of the double balancing process 76 in operation.

In step 198, the gateway entity 20 sends a monitoring message to a first sensor 14, requesting that the sensor confirm that it is operational and connected to the gateway entity 20. In step 200, the sensor responds by acknowledging the monitoring message, and therefore the gateway entity does not take any action (step 202) in response to the acknowledgment. In contrast, when the gateway entity 20 sends a monitoring message, in step 204, to a second sensor 16, the gateway entity receives no response (step 206). Accordingly, in step 208, the gateway entity flags that the second sensor 16 has failed, and reports the failure to the cloud processing device 28. The cloud processing device 28 flags, in step 210, that the second sensor has failed, and reports the failure to the third-party service 30. In step 212, the third-party service 30 flags that the second sensor 16 has failed.

In step 214, the first sensor 14 detects the occurrence of an event, and sends an event notification to the gateway entity 20. In response to the event notification, in step 216 the gateway entity 20 invokes the balancer 68 to allocate resources for the first sensor 14. In step 218, the first sensor 14 sends raw data to the gateway entity 20, which the gateway entity 20 processes in step 220. In this example, the gateway entity 20 determines, in step 220, that the data triggers an action from the set of trigger rules 114. In this case, the action involves generating a voice prompt back to the first sensor 14 in order to request additional information. Accordingly, in step 224, the gateway entity 20 generates a voice prompt, for example, "An event has been detected by the first sensor. Do you need help?". The voice prompt can be a predetermined prompt stored in the trigger rule set 114, or it can be generated dynamically by the gateway entity 20. For example, the gateway entity 20 may generate a text file containing information to be transmitted to the first sensor 14, and may use text-to-speech algorithms to convert the text file to an audio stream.

In step 226, the gateway sends a remote access request to the first sensor 14, requesting that the first sensor 14 hand over control of the sensor's speaker and microphone. In response, the sensor 14 opens a channel with the gateway entity 20 to allow bidirectional audio communication to take place, and plays the prompt generated in step 224 through the sensor's speaker. In step 228, the gateway entity 20 receives an acknowledgment that the sensor 14 has accepted the remote access request, and begins a bidirectional interaction with any user in the presence of the sensor 14, using audio feedback and speech-to-text algorithms.
The gateway entity 20 forwards the audio data received from the sensor 14 to the cloud processing device 28 in step 230, which in turn performs advanced data processing, records and archives the audio for future reference, and facilitates processing at the gateway entity 20 by dynamically generating an interaction dictionary based on the content of the received audio. For example, the cloud processing device 28 provides the gateway entity 20 with a list of words or phrases that are applicable in the context of the received audio, and the gateway entity uses the list of words and phrases to continue a real-time conversation through the first sensor 14. For example, in response to the initial message from the gateway entity, "do you need assistance?", a user may have responded with "yes, there is a fire in the house". In response, the cloud processing device 28 generates an interaction dictionary that includes phrases such as "how many people are in the house?", "can everyone safely leave the house?", and "where is the person currently in need of assistance?". If the gateway entity 20 learns that someone cannot leave the house safely, then the gateway entity 20 can, using the interaction dictionary, generate a prompt asking where the user who needs assistance is located.

In step 232, the sensor 14 receives an audio input acknowledging the event, for example, "yes, there is a fire in the house". The sensor 14 keeps open the audio channel established in step 226 and continues to forward audio data to the gateway entity 20. In step 234, having received an acknowledgment that the event is taking place, the gateway entity 20 determines that it should contact other sensors in the vicinity of the first sensor 14. Accordingly, the balancer 68 of the gateway entity 20 allocates additional resources for the multiple sensors. In step 236, the gateway entity 20 broadcasts an audio alarm, for example, "an emergency has been detected; please proceed calmly to the nearest exit", to all sensors in the immediate vicinity of the first sensor 14. For example, the audio alarm may be in the form of a trigger message containing an audio recording and instructions for playing the audio recording through the sensor speakers. In steps 238-242, the sensors in the vicinity of the first sensor 14, except for the second sensor 16, which exhibited a failure in steps 206-212, receive the audio alarm and play the audio alarm through their respective speakers.

Meanwhile, in step 238, the cloud processing device 28 initiates a call to a monitoring station of the third-party service 30. In addition, in step 240, the cloud processing device 28 initiates a call to 911 to summon first responders. The cloud processing device 28 transfers the 911 call to the third-party service 30, which connects the call in step 242 and transmits the GPS coordinates, for example, of the gateway entity 20 and/or the first sensor 14, to the 911 service. Simultaneously, in step 244, the gateway entity 20 remotely connects to the third-party service 30 and the 911 service, and submits requests for remote access to the accessible sensors (steps 246-250). The gateway entity 20 accesses the sensor data and provides it to the third-party service 30 and the 911 service. In step 252, the third-party service 30 receives the sensor data and maintains an audio and video connection to the monitored zone 12 through the remotely accessed sensors.
Once it is determined that the event is over, the gateway entity 20, in step 254, invokes the balancer 68 to release the resources allocated to the event, and returns to a supervisory mode.

As can be seen from this example and the embodiments described above, the double balancing process 76 allows processing jobs in architecture 10 to be distributed among the different levels of hierarchy 32 as appropriate, saving processing resources in the sensors 14, 16 and the gateway entity 20. Because the sensors only need to process basic information to determine whether an event is occurring, and can then forward the data to the gateway entity 20, the sensors can operate with fewer processing resources, making them less expensive and better able to operate in low-power or idle modes. In addition, complex processing tasks can be performed at the higher levels of hierarchy 32, allowing more complicated data analyses and procedures, such as real-time audio interactions, to be performed.

As used herein, an element or step set forth in the singular and preceded by the word "a" or "an" should be understood as not excluding plural elements or steps, unless such exclusion is explicitly stated. In addition, references to "an embodiment" are not to be interpreted as excluding the existence of additional embodiments that also incorporate the features set forth. Although certain embodiments of the description have been described herein, it is not intended that the description be limited thereto, since it is intended that the description be as broad in scope as the art allows and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended hereto.
CLAIMS

1. A resource balancing apparatus in an automation and alarm architecture comprising:
a memory operable to store data from a sensor; and
a processor operable to:
receive an event notification from the sensor indicating that an event has occurred or is expected to occur,
allocate processing or memory resources for the sensor in response to the event notification,
receive data from the sensor,
process the received data using the allocated resources, and
release the resources when the event is concluded.

2. The apparatus of claim 1, wherein the memory further stores processing logic specific to a type of data generated by the sensor, and the processor invokes the processing logic to process the received data.

3. The apparatus of claim 2, wherein the processor is further operable to:
receive a configuration update that changes the processing logic, and
process the configuration update to change the processing logic stored in the memory.

4. The apparatus of claim 1, wherein the processor is further operable to transmit a remote access request to the sensor, the remote access request instructing the sensor to give control over one or more components of the sensor to the apparatus.

5. The apparatus of claim 1, wherein the received data includes audio data, and the processor is further operable to generate a real-time audio interaction using an interaction dictionary provided by a cloud processing device.

6. The apparatus of claim 1, wherein the processor is further operable to generate a trigger in response to the received data, the trigger instructing an output device to take an action.

7. The apparatus of claim 1, wherein the allocation of the resources is made based on a type of the sensor or a type of the data provided by the sensor.

8. An apparatus comprising:
a detector operable to generate sensor data;
a memory operable to store the sensor data; and
a processor operable to:
evaluate the sensor data to determine if the sensor data indicates that an event is occurring or is expected to occur,
generate an event notification when the evaluation determines that an event is occurring or is expected to occur,
send the event notification to a gateway entity, and
after forwarding the event notification, forward the sensor data to the gateway entity.

9. The apparatus of claim 8, wherein the processor is further operable to:
receive a remote access request from an entity, the remote access request instructing the processor to give control over a component of the apparatus to the entity;
evaluate the entity to determine if the entity is authorized to access the component; and
provide control over the component to the entity when the entity is authorized to access the component.

10. The apparatus of claim 8, further comprising an output device, wherein the processor is further operable to:
receive a trigger requesting that the apparatus take an action with respect to the output device, and
trigger the output device to take the action.

11. The apparatus of claim 8, wherein the processor is further operable to establish a critical timing path between the apparatus and the gateway entity, the critical timing path being configured to provide real-time or near-real-time interaction between the gateway entity and the apparatus.

12. A method for balancing resources in an automation and alarm architecture comprising:
receiving, at a processor of a gateway entity, from a sensor, an event notification indicating that an event is occurring or is expected to occur at the sensor;
allocating, in response to the event notification, one or more resources at the gateway entity;
establishing a critical timing path between the gateway entity and the sensor;
receiving data from the sensor at the gateway entity;
processing the received data in real time or near real time; and
triggering an action in response to the received data.

13. The method of claim 12, wherein triggering the action comprises forwarding the received data to a cloud processing device for advanced processing.

14. The method of claim 12, wherein triggering the action comprises triggering an output device to activate an alarm.

15. The method of claim 12, wherein triggering the action comprises providing data from the sensor to a first responder service.

16. The method of claim 12, wherein triggering the action comprises engaging in a two-way audio interaction with the sensor.

17. The method of claim 16, wherein the two-way audio interaction is directed based on an interaction dictionary provided by a cloud processing device.

18. The method of claim 12, further comprising: determining that the event is complete, and releasing the allocated resources.

19. The method of claim 12, further comprising: receiving a configuration update that changes a way in which the gateway entity processes data from the sensor, and applying the configuration update.

20. The method of claim 12, wherein the gateway entity determines, in response to the sensor data, to interact with additional sensors, and the gateway entity allocates additional resources for the interaction with the multiple sensors.
类似技术:
公开号 | 公开日 | 专利标题 ES2646632B2|2019-05-03|Method and apparatus for balancing resources in an automation and alarm architecture US10397042B2|2019-08-27|Method and apparatus for automation and alarm architecture US10803720B2|2020-10-13|Intelligent smoke sensor with audio-video verification US10242541B2|2019-03-26|Security and first-responder emergency lighting system US10440130B2|2019-10-08|Thermostat and messaging device and methods thereof US10600292B2|2020-03-24|Enhanced emergency detection system JP2017506788A|2017-03-09|Smart emergency exit display US9721457B2|2017-08-01|Global positioning system equipped with hazard detector and a system for providing hazard alerts thereby JP2015512548A|2015-04-27|Monitoring system EP3268943B1|2021-06-02|Providing internet access through a property monitoring system KR20130134585A|2013-12-10|Apparatus and method for sharing sensing information of portable device WO2017117674A1|2017-07-13|Intelligent smoke sensor with audio-video verification JP6934585B1|2021-09-15|Transmission of sensor signals depending on the orientation of the device US11176799B2|2021-11-16|Global positioning system equipped with hazard detector and a system for providing hazard alerts thereby US20210097827A1|2021-04-01|Systems and methods for alerting disaster events JP2018106356A|2018-07-05|Facility control system and facility control method WO2021236450A1|2021-11-25|Operating wireless devices and image data systems WO2018093802A1|2018-05-24|Leveraging secondary wireless communications functionality for system setup and/or diagnostics
同族专利:
公开号 | 公开日 GB2550476B|2021-07-14| GB2550476A|2017-11-22| GB201704954D0|2017-05-10| US20160098305A1|2016-04-07| ES2646632B2|2019-05-03| ES2646632R1|2018-03-13| US10592306B2|2020-03-17| WO2016049778A1|2016-04-07| MX2017004286A|2017-06-26|
引用文献:
Legal status:
2019-05-03 | FG2A | Definitive protection | Ref document number: 2646632 | Country of ref document: ES | Kind code of ref document: B2 | Effective date: 2019-05-03
Priority:
Application number | Publication number | Priority date | Filing date | Patent title
US62/059,410 (US201462059410P) | — | 2014-10-03 | 2014-10-03 | —
US14/857,900 | US10592306B2 | 2014-10-03 | 2015-09-18 | Method and apparatus for resource balancing in an automation and alarm architecture
PCT/CA2015/051000 | WO2016049778A1 | 2014-10-03 | 2015-10-02 | Method and apparatus for resource balancing in an automation and alarm architecture